
    Explaining the Black-box Smoothly- A Counterfactual Approach

    We propose a BlackBox \emph{Counterfactual Explainer}, developed explicitly for medical imaging applications. Classical approaches (e.g., saliency maps) that assess feature importance do not explain \emph{how} and \emph{why} variations in a particular anatomical region are relevant to the outcome, which is crucial for transparent decision making in healthcare applications. Our framework explains the outcome by gradually \emph{exaggerating} the semantic effect of the given outcome label. Given a query input to a classifier, Generative Adversarial Networks produce a progressive set of perturbations to the query image that gradually changes the posterior probability from its original class to its negation. We design the loss function to ensure that essential and potentially relevant details, such as support devices, are preserved in the counterfactually generated images. We provide an extensive evaluation on different classification tasks on chest X-ray images. Our experiments show that the counterfactually generated visual explanations are consistent with the disease's clinically relevant measurements, both quantitatively and qualitatively. Comment: Under review for the IEEE-TMI journal
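    The progressive perturbation idea described above can be sketched with a toy example. This is a minimal, hypothetical illustration only: the real method conditions a GAN on the target posterior, whereas here a logistic "classifier" and an additive "generator" stand in for the trained networks, so the names `classifier`, `generator`, and `progressive_counterfactuals` are invented for this sketch.

    ```python
    import math

    def classifier(x):
        """Toy posterior p(y=1|x): a logistic function of a scalar feature."""
        return 1.0 / (1.0 + math.exp(-x))

    def generator(x, delta):
        """Stand-in for a conditional generator G(x, delta): here just an
        additive shift of the query's feature by the condition delta."""
        return x + delta

    def progressive_counterfactuals(x, steps=5, step_size=-1.5):
        """Walk the condition from 0 toward the opposite class, recording
        each perturbed input and its posterior probability."""
        trajectory = []
        for k in range(steps + 1):
            xk = generator(x, k * step_size)
            trajectory.append((xk, classifier(xk)))
        return trajectory

    traj = progressive_counterfactuals(x=3.0)
    probs = [p for _, p in traj]
    # posterior decreases monotonically from ~0.95 toward ~0.01,
    # mimicking the gradual flip from the original class to its negation
    ```

    The point of the sketch is only the shape of the output: a sequence of increasingly perturbed inputs whose posterior probability moves smoothly across the decision boundary, which is what makes the explanation "progressive" rather than a single saliency map.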

    Deep Learning for Medical Imaging: From Diagnosis Prediction to its Explanation

    Deep neural networks (DNNs) have achieved unprecedented performance in computer-vision tasks almost ubiquitously in business, technology, and science. While substantial efforts are made to engineer highly accurate architectures and provide usable model explanations, most state-of-the-art approaches are first designed for natural vision and then translated to the medical domain. This dissertation seeks to address this gap by proposing novel architectures that integrate the domain-specific constraints of medical imaging into the DNN model and explanation design. Prior work on DNN design commonly performs lossy data manipulation to make volumetric data compatible with 2D or low-resolution 3D architectures. To this end, we proposed a novel DNN architecture that transforms volumetric medical imaging data of any resolution into a robust representation that is highly predictive of disease. For DNN model explanation, current explanation methods primarily focus on highlighting the essential regions (where) for the classification decisions. The location information alone is insufficient for applications in medical imaging. We designed counterfactual explanations to visually demonstrate how adding or removing image features changes the DNN decision to be positive or negative for a diagnosis. Further, we reinforced the explanations by quantifying the causal relationship between neurons in the DNN and relevant clinical concepts. These clinical concepts are derived from radiology reports and are corroborated by clinicians to be useful in identifying the underlying diagnosis. In the medical domain, multiple conditions may have a similar visual appearance, and it is common to have images with conditions that are novel for the pre-trained DNN. A DNN should refrain from making over-confident predictions on such data and mark it for a second reading. Our final work proposed a novel strategy to make any off-the-shelf DNN classifier adhere to this clinical requirement.
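    The "mark for a second reading" behavior described above amounts to selective prediction: the classifier answers only when confident, and defers otherwise. The sketch below shows the generic abstain-on-low-confidence pattern as an illustration; the threshold rule and the `predict_or_defer` helper are stand-ins invented here, not the specific strategy the dissertation proposes.

    ```python
    def predict_or_defer(probs, threshold=0.8):
        """Return the argmax class when the softmax output is confident
        enough; otherwise return None to flag the case for human review."""
        conf = max(probs)
        if conf < threshold:
            return None  # defer: send the image for a second reading
        return probs.index(conf)

    # A confident prediction is returned; an ambiguous one is deferred.
    confident = predict_or_defer([0.95, 0.03, 0.02])   # class index 0
    ambiguous = predict_or_defer([0.45, 0.35, 0.20])   # None (deferred)
    ```

    Wrapping an off-the-shelf classifier this way changes its contract from "always answer" to "answer or escalate", which is the clinical requirement the final work targets; the actual method would calibrate or replace this naive max-softmax score.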